Publications (filtered by tag: PFL)

Generated by bibbase.org.
To embed this list in an existing web page, copy and paste any of the following snippets.

JavaScript (easiest):

<script src="https://bibbase.org/show?bib=https%3A%2F%2Fbibbase.org%2Fnetwork%2Ffiles%2Fe2kjGxYgtBo8SWSbC&authorFirst=1&nocache=1&fullnames=1&theme=bullets&group0=year&group1=type&owner={}&filter=tags:PFL&jsonp=1"></script>

PHP:

<?php
$contents = file_get_contents("https://bibbase.org/show?bib=https%3A%2F%2Fbibbase.org%2Fnetwork%2Ffiles%2Fe2kjGxYgtBo8SWSbC&authorFirst=1&nocache=1&fullnames=1&theme=bullets&group0=year&group1=type&owner={}&filter=tags:PFL&jsonp=1");
print_r($contents);
?>

iFrame (not recommended):

<iframe src="https://bibbase.org/show?bib=https%3A%2F%2Fbibbase.org%2Fnetwork%2Ffiles%2Fe2kjGxYgtBo8SWSbC&authorFirst=1&nocache=1&fullnames=1&theme=bullets&group0=year&group1=type&owner={}&filter=tags:PFL&jsonp=1"></iframe>

For more details, see the BibBase documentation.
2023 (2)

Type 1 (1)
Antonious M. Girgis and Suhas N. Diggavi. Multi-Message Shuffled Privacy in Federated Learning. CoRR, abs/2302.11152, 2023.
@article{DBLP:journals/corr/abs-2302-11152,
  author     = {Antonious M. Girgis and Suhas N. Diggavi},
  title      = {Multi-Message Shuffled Privacy in Federated Learning},
  journal    = {CoRR},
  volume     = {abs/2302.11152},
  year       = {2023},
  url        = {https://doi.org/10.48550/arXiv.2302.11152},
  doi        = {10.48550/arXiv.2302.11152},
  eprinttype = {arXiv},
  eprint     = {2302.11152},
  type       = {1},
  tags       = {journalSub,PFL},
}
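The entry carries no abstract, but the title names the multi-message shuffle model of privacy. As a minimal sketch of that general setting (the concept only, not this paper's specific mechanism; the function name and message format are illustrative assumptions), each client sends several locally randomized messages and a trusted shuffler forwards them in random order, so the server cannot link any message back to its sender:

# Minimal sketch of the multi-message shuffle model (general concept, not the
# paper's mechanism). Each client contributes several randomized messages;
# the shuffler outputs one anonymized, uniformly permuted batch.
import random

def shuffle_reports(per_client_messages):
    """per_client_messages: list of per-client message lists."""
    batch = [m for msgs in per_client_messages for m in msgs]
    random.shuffle(batch)  # uniform permutation removes client linkage
    return batch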
Type 4 (1)
Kaan Ozkara, Antonious M. Girgis, Deepesh Data, and Suhas Diggavi. A Generative Framework for Personalized Learning and Estimation: Theory, Algorithms, and Privacy. In International Conference on Learning Representations (ICLR), 2023.
@article{ozkara2022generative,
  title     = {A Generative Framework for Personalized Learning and Estimation: Theory, Algorithms, and Privacy},
  author    = {Ozkara, Kaan and Girgis, Antonious M and Data, Deepesh and Diggavi, Suhas},
  journal   = {in International Conference on Learning Representations (ICLR)},
  year      = {2023},
  tags      = {conf,PFL},
  type      = {4},
  url_arxiv = {https://arxiv.org/abs/2207.01771},
  url       = {https://openreview.net/forum?id=FUiDMCr_W4o},
  abstract  = {A distinguishing characteristic of federated learning is that the (local) client data could have statistical heterogeneity. This heterogeneity has motivated the design of personalized learning, where individual (personalized) models are trained, through collaboration. There have been various personalization methods proposed in literature, with seemingly very different forms and methods ranging from use of a single global model for local regularization and model interpolation, to use of multiple global models for personalized clustering, etc. In this work, we begin with a generative framework that could potentially unify several different algorithms as well as suggest new algorithms. We apply our generative framework to personalized estimation, and connect it to the classical empirical Bayes' methodology. We develop private personalized estimation under this framework. We then use our generative framework for learning, which unifies several known personalized FL algorithms and also suggests new ones; we propose and study a new algorithm AdaPeD based on a Knowledge Distillation, which numerically outperforms several known algorithms. We also develop privacy for personalized learning methods with guarantees for user-level privacy and composition. We numerically evaluate the performance as well as the privacy for both the estimation and learning problems, demonstrating the advantages of our proposed methods.},
}
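The abstract describes AdaPeD only at a high level: each client trains a personalized model regularized by knowledge distillation toward a collaboratively learned global model. As a minimal sketch of that idea (not the paper's actual algorithm; the weighting lam, temperature T, and function names are illustrative assumptions), a KD-regularized local objective could look like:

# Minimal sketch (assumed PyTorch): local task loss plus a distillation term
# that pulls the personalized model toward the server's global model.
# `lam` and `T` are illustrative hyperparameters, not values from the paper.
import torch.nn.functional as F

def personalized_kd_loss(personal_logits, global_logits, labels, lam=0.5, T=2.0):
    task = F.cross_entropy(personal_logits, labels)  # local task loss
    kd = F.kl_div(                                   # distillation term
        F.log_softmax(personal_logits / T, dim=-1),
        F.softmax(global_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return task + lam * kd

The adaptive weighting that presumably gives AdaPeD its name would tune lam per client; the entry does not say how, so it is left fixed here.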
2021 (1)

Type 4 (1)
Kaan Ozkara, Navjot Singh, Deepesh Data, and Suhas Diggavi. QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning. Advances in Neural Information Processing Systems, 34, 2021.
@article{ozkara2021quped,
  title     = {QuPeD: Quantized Personalization via Distillation with Applications to Federated Learning},
  author    = {Ozkara, Kaan and Singh, Navjot and Data, Deepesh and Diggavi, Suhas},
  journal   = {Advances in Neural Information Processing Systems},
  volume    = {34},
  year      = {2021},
  tags      = {conf,CEDL,DML,PFL},
  url_arxiv = {https://arxiv.org/abs/2107.13892},
  type      = {4},
  abstract  = {Traditionally, federated learning (FL) aims to train a single global model while collaboratively using multiple clients and a server. Two natural challenges that FL algorithms face are heterogeneity in data across clients and collaboration of clients with diverse resources. In this work, we introduce a quantized and personalized FL algorithm QuPeD that facilitates collective (personalized model compression) training via knowledge distillation (KD) among clients who have access to heterogeneous data and resources. For personalization, we allow clients to learn compressed personalized models with different quantization parameters and model dimensions/structures. Towards this, first we propose an algorithm for learning quantized models through a relaxed optimization problem, where quantization values are also optimized over. When each client participating in the (federated) learning process has different requirements for the compressed model (both in model dimension and precision), we formulate a compressed personalization framework by introducing knowledge distillation loss for local client objectives collaborating through a global model. We develop an alternating proximal gradient update for solving this compressed personalization problem, and analyze its convergence properties. Numerically, we validate that QuPeD outperforms competing personalized FL methods, FedAvg, and local training of clients in various heterogeneous settings.},
}
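The abstract outlines an alternating proximal gradient method in which both the weights and the quantization values are optimized. Here is a minimal NumPy sketch of one such alternating step (the proximal strength rho and the k-means-style level refit are assumptions for illustration, not the paper's exact operators):

# Minimal sketch: alternate between (a) a gradient step on full-precision
# weights, (b) a proximal pull of each weight toward its nearest quantization
# level, and (c) re-fitting the levels to the weights assigned to them.
# Weights are assumed flattened to a 1-D array.
import numpy as np

def alternating_quantization_step(w, levels, grad, lr=0.1, rho=0.01):
    w = w - lr * grad                                 # (a) local loss step
    dists = np.abs(w[:, None] - levels[None, :])
    nearest = levels[np.argmin(dists, axis=1)]
    w = (w + rho * nearest) / (1.0 + rho)             # (b) proximal pull
    assign = np.argmin(np.abs(w[:, None] - levels[None, :]), axis=1)
    for k in range(levels.size):                      # (c) refit levels
        if np.any(assign == k):
            levels[k] = w[assign == k].mean()
    return w, levels

A final hard quantization would snap each weight to its nearest learned level once training converges.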